
    SLIC Based Digital Image Enlargement

    Low-resolution image enhancement is a classical computer vision problem. Selecting the best method to reconstruct an image at a higher resolution from the limited data available in the low-resolution image is quite a challenge. A major drawback of existing enlargement techniques is the introduction of color bleeding when interpolating pixels across the edges that separate distinct colors in an image. Color bleeding accentuates the edges with new colors as a result of blending multiple colors over adjacent regions. This paper proposes a novel approach to mitigate color bleeding by segmenting the homogeneous color regions of the image using Simple Linear Iterative Clustering (SLIC) and applying a higher-order interpolation technique separately on the isolated segments. The interpolation at the boundaries of each isolated segment is handled using a morphological operation. The approach is evaluated by comparison against several frequently used image enlargement methods, such as bilinear and bicubic interpolation, in terms of Peak Signal-to-Noise Ratio (PSNR). The results show that the proposed method outperforms the baseline methods in terms of PSNR and also mitigates the color bleeding at the edges, which improves the overall appearance. Comment: 6 pages
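    The PSNR metric used for the comparison above can be computed directly from the mean squared error between the reference and enlarged images. A minimal pure-Python sketch (the 8-bit peak value of 255 and the flat pixel lists are illustrative assumptions, not taken from the paper):

    ```python
    import math

    def psnr(reference, reconstructed, peak=255.0):
        """Peak Signal-to-Noise Ratio in dB between two equal-length pixel lists."""
        if len(reference) != len(reconstructed):
            raise ValueError("images must have the same number of pixels")
        mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / len(reference)
        if mse == 0:
            return math.inf  # identical images: PSNR is unbounded
        return 10.0 * math.log10(peak ** 2 / mse)

    # Toy example: an enlargement that is off by one gray level at every pixel,
    # giving MSE = 1 and hence PSNR = 10 * log10(255^2) dB.
    original = [0, 64, 128, 255]
    enlarged = [1, 65, 129, 254]
    print(round(psnr(original, enlarged), 2))  # 48.13
    ```

    A higher PSNR means the enlarged image is closer to the reference, which is how the paper ranks the proposed method against bilinear and bicubic baselines.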

    Imaging and Dynamics of Light Atoms and Molecules on Graphene

    Observing the individual building blocks of matter is one of the primary goals of microscopy. The invention of the scanning tunneling microscope [1] revolutionized experimental surface science in that atomic-scale features on a solid-state surface could finally be readily imaged. However, scanning tunneling microscopy has limited applicability due to restrictions, for example, in sample conductivity, cleanliness, and data acquisition rate. An older microscopy technique, that of transmission electron microscopy (TEM) [2, 3], has benefited tremendously in recent years from subtle instrumentation advances, and individual heavy (high atomic number) atoms can now be detected by TEM [4 - 7] even when embedded within a semiconductor material [8, 9]. However, detecting an individual low atomic number atom, for example carbon or even hydrogen, is still extremely challenging, if not impossible, via conventional TEM due to the very low contrast of light elements [2, 3, 10 - 12]. Here we demonstrate a means to observe, by conventional transmission electron microscopy, even the smallest atoms and molecules: On a clean single-layer graphene membrane, adsorbates such as atomic hydrogen and carbon can be seen as if they were suspended in free space. We directly image such individual adatoms, along with carbon chains and vacancies, and investigate their dynamics in real time. These techniques open a way to reveal dynamics of more complex chemical reactions or identify the atomic-scale structure of unknown adsorbates. In addition, the study of atomic-scale defects in graphene may provide insights for nanoelectronic applications of this interesting material. Comment: 9 pages manuscript and figures, 9 pages supplementary information

    The role of the chemokine receptor CXCR4 in infection with feline immunodeficiency virus

    Infection with feline immunodeficiency virus (FIV) leads to the development of a disease state similar to AIDS in man. Recent studies have identified the chemokine receptor CXCR4 as the major receptor for cell culture-adapted strains of FIV, suggesting that FIV and human immunodeficiency virus (HIV) share a common mechanism of infection involving an interaction between the virus and a member of the seven transmembrane domain superfamily of molecules. This article reviews the evidence for the involvement of chemokine receptors in FIV infection and contrasts these findings with similar studies on the primate lentiviruses HIV and SIV (simian immunodeficiency virus).

    Weak lensing, dark matter and dark energy

    Weak gravitational lensing is rapidly becoming one of the principal probes of dark matter and dark energy in the universe. In this brief review we outline how weak lensing helps determine the structure of dark matter halos, measure the expansion rate of the universe, and distinguish between modified gravity and dark energy explanations for the acceleration of the universe. We also discuss requirements on the control of systematic errors so that the systematics do not appreciably degrade the power of weak lensing as a cosmological probe. Comment: Invited review article for the GRG special issue on gravitational lensing (P. Jetzer, Y. Mellier and V. Perlick Eds.). V3: subsection on three-point function and some references added. Matches the published version

    Thermodynamic analysis of humidification dehumidification desalination cycles

    Humidification–dehumidification (HDH) desalination is a promising technology for small-scale water production applications. There are several embodiments of this technology which have been investigated by researchers around the world. However, a previous literature review [1] found that no study has carried out a detailed thermodynamic analysis in order to improve and/or optimize system performance. In this paper, we analyze the thermodynamic performance of various HDH cycles by way of a theoretical cycle analysis. In addition, we propose novel high-performance variations on those cycles, including multi-extraction, multi-pressure, and thermal vapor compression cycles. It is predicted that systems based on these novel cycles will have a gained output ratio in excess of 5 and will outperform existing HDH systems. King Fahd University of Petroleum and Minerals; Center for Clean Water and Clean Energy at MIT and KFUPM
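    The gained output ratio (GOR) cited above measures how much latent heat is recovered in the distillate per unit of heat input. A minimal sketch of the definition (the latent-heat value and the flow-rate numbers are illustrative assumptions, not values from the paper):

    ```python
    # Approximate latent heat of vaporization of water (kJ/kg) at typical
    # HDH operating temperatures; an assumed round value for illustration.
    H_FG = 2326.0

    def gained_output_ratio(product_kg_per_s, heat_input_kw, h_fg=H_FG):
        """GOR = (product mass flow * latent heat) / heat input (dimensionless).

        product_kg_per_s : fresh-water production rate in kg/s
        heat_input_kw    : external heat supplied to the cycle in kW (= kJ/s)
        """
        return product_kg_per_s * h_fg / heat_input_kw

    # A hypothetical cycle producing 1 kg/s of fresh water from 450 kW of heat:
    print(round(gained_output_ratio(1.0, 450.0), 2))  # 5.17
    ```

    A GOR above 5 thus means each unit of heat input yields more than five units of latent heat in the product water, which is the sense in which the proposed cycles are predicted to outperform existing HDH systems.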

    Euclid Preparation. TBD. Impact of magnification on spectroscopic galaxy clustering

    In this paper we investigate the impact of lensing magnification on the analysis of Euclid's spectroscopic survey, using the multipoles of the 2-point correlation function for galaxy clustering. We determine the impact of lensing magnification on cosmological constraints, and the expected shift in the best-fit parameters if magnification is ignored. We consider two cosmological analyses: i) a full-shape analysis based on the ΛCDM model and its extension w0waCDM and ii) a model-independent analysis that measures the growth rate of structure in each redshift bin. We adopt two complementary approaches in our forecast: the Fisher matrix formalism and the Markov chain Monte Carlo method. The fiducial values of the local count slope (or magnification bias), which regulates the amplitude of the lensing magnification, have been estimated from the Euclid Flagship simulations. We use linear perturbation theory and model the 2-point correlation function with the public code coffe. For a ΛCDM model, we find that the estimation of cosmological parameters is biased at the level of 0.4-0.7 standard deviations, while for a w0waCDM dynamical dark energy model, lensing magnification has a somewhat smaller impact, with shifts below 0.5 standard deviations. In a model-independent analysis aiming to measure the growth rate of structure, we find that the estimation of the growth rate is biased by up to 1.2 standard deviations in the highest redshift bin. As a result, lensing magnification cannot be neglected in the spectroscopic survey, especially if we want to determine the growth factor, one of the most promising ways to test general relativity with Euclid. We also find that, by including lensing magnification with a simple template, this shift can be almost entirely eliminated with minimal computational overhead
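    The "expected shift in the best-fit parameters if magnification is ignored" is commonly estimated in Fisher forecasts with the bias formula δθ = F⁻¹ B, where F is the Fisher matrix and B projects the neglected systematic onto the parameters. A toy two-parameter sketch (all matrix entries are made-up illustrative numbers, not Euclid forecast values):

    ```python
    def invert_2x2(m):
        """Inverse of a 2x2 matrix given as [[a, b], [c, d]]."""
        (a, b), (c, d) = m
        det = a * d - b * c
        return [[d / det, -b / det], [-c / det, a / det]]

    def parameter_bias(fisher, systematic):
        """delta_theta_i = sum_j (F^-1)_ij B_j: best-fit shift from a
        neglected systematic (here, ignoring lensing magnification)."""
        inv = invert_2x2(fisher)
        return [sum(inv[i][j] * systematic[j] for j in range(2))
                for i in range(2)]

    # Illustrative numbers only: a two-parameter Fisher matrix with mild
    # correlation, and a systematic vector B from the ignored signal.
    F = [[100.0, 20.0], [20.0, 50.0]]
    B = [5.0, 2.0]
    print(parameter_bias(F, B))
    ```

    Comparing each component of δθ to the corresponding marginalized error (the square root of the diagonal of F⁻¹) gives the "shift in standard deviations" that the abstract quotes.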

    Properties of Graphene: A Theoretical Perspective

    In this review, we provide an in-depth description of the physics of monolayer and bilayer graphene from a theorist's perspective. We discuss the physical properties of graphene in an external magnetic field, reflecting the chiral nature of the quasiparticles near the Dirac point with a Landau level at zero energy. We address the unique integer quantum Hall effects, the role of electron correlations, and the recent observation of the fractional quantum Hall effect in monolayer graphene. The quantum Hall effect in bilayer graphene is fundamentally different from that of a monolayer, reflecting the unique band structure of this system. The theory of transport in the absence of an external magnetic field is discussed in detail, along with the role of disorder studied in various theoretical models. We highlight the differences and similarities between monolayer and bilayer graphene, and focus on thermodynamic properties such as the compressibility, the plasmon spectra, the weak localization correction, the quantum Hall effect, and optical properties. Confinement of electrons in graphene is nontrivial due to Klein tunneling. We review various theoretical and experimental studies of quantum confined structures made from graphene. The band structure of graphene nanoribbons and the role of the sublattice symmetry, edge geometry and the size of the nanoribbon on the electronic and magnetic properties are very active areas of research, and a detailed review of these topics is presented. Also, the effects of substrate interactions, adsorbed atoms, lattice defects and doping on the band structure of finite-sized graphene systems are discussed. We also include a brief description of graphane, a gapped material obtained from graphene by attaching hydrogen atoms to each carbon atom in the lattice. Comment: 189 pages. Submitted to Advances in Physics

    Immunogenic Profiling in Mice of a HIV/AIDS Vaccine Candidate (MVA-B) Expressing Four HIV-1 Antigens and Potentiation by Specific Gene Deletions

    BACKGROUND: The immune parameters of HIV/AIDS vaccine candidates that might be relevant in protection against HIV-1 infection are still undefined. The highly attenuated poxvirus strain MVA is one of the most promising vectors to be used as an HIV-1 vaccine. We have previously described a recombinant MVA expressing HIV-1 Env, Gag, Pol and Nef antigens from clade B (referred to as MVA-B) that induced HIV-1-specific immune responses in different animal models and gene signatures in human dendritic cells (DCs) with immunoregulatory function. METHODOLOGY/PRINCIPAL FINDINGS: In an effort to characterize in more detail the immunogenic profile of MVA-B and to improve its immunogenicity, we have generated a new vector lacking two genes (A41L and B16R) known to counteract host immune responses by blocking the action of CC-chemokines and of interleukin-1β, respectively (referred to as MVA-B ΔA41L/ΔB16R). A DNA prime/MVA boost immunization protocol was used to compare the adaptive and memory HIV-1-specific immune responses induced in mice by the parental MVA-B and by the double deletion mutant MVA-B ΔA41L/ΔB16R. Flow cytometry analysis revealed that both vectors triggered HIV-1-specific CD4(+) and CD8(+) T cells, with the CD8(+) T-cell compartment responsible for >91.9% of the total HIV-1 responses in both immunization groups. However, MVA-B ΔA41L/ΔB16R enhanced the magnitude and polyfunctionality of the HIV-1-specific CD4(+) and CD8(+) T-cell immune responses. HIV-1-specific CD4(+) T-cell responses were polyfunctional and preferentially Env-specific in both immunization groups. Significantly, while MVA-B induced preferentially Env-specific CD8(+) T-cell responses, MVA-B ΔA41L/ΔB16R induced more GPN-specific CD8(+) T-cell responses, with an enhanced polyfunctional pattern. Both vectors were capable of producing similar levels of antibodies against Env.
CONCLUSIONS/SIGNIFICANCE: These findings revealed that MVA-B and MVA-B ΔA41L/ΔB16R induced in mice robust, polyfunctional and durable T-cell responses to HIV-1 antigens, but the double deletion mutant showed enhanced magnitude and quality of HIV-1-specific adaptive and memory responses. Our observations are relevant to the immune evaluation of MVA-B and to improvements of MVA vectors as HIV-1 vaccines

    Euclid: Testing the Copernican principle with next-generation surveys

    The Copernican principle, the notion that we are not at a special location in the Universe, is one of the cornerstones of modern cosmology, and its violation would invalidate the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, causing a major change in our understanding of the Universe. Thus, it is of fundamental importance to perform observational tests of this principle. We determine the precision with which future surveys will be able to test the Copernican principle and their ability to detect any possible violations. We forecast constraints on the inhomogeneous Lemaître-Tolman-Bondi model with a cosmological constant Λ (ΛLTB), basically a cosmological constant Λ and cold dark matter (ΛCDM) model, but endowed with a spherical inhomogeneity. We consider combinations of currently available data and simulated Euclid data, together with external data products, based on both ΛCDM and ΛLTB fiducial models. These constraints are compared to the expectations from the Copernican principle. When considering the ΛCDM fiducial model, we find that Euclid data, in combination with other current and forthcoming surveys, will improve the constraints on the Copernican principle by about 30%, with ±10% variations depending on the observables and scales considered. On the other hand, when considering a ΛLTB fiducial model, we find that future Euclid data, combined with other current and forthcoming data sets, will be able to detect Gpc-scale inhomogeneities of contrast -0.1. Next-generation surveys, such as Euclid, will thoroughly test homogeneity at large scales, tightening the constraints on possible violations of the Copernican principle